55 research outputs found

    Implicit sampling for path integral control, Monte Carlo localization, and SLAM

    The applicability and usefulness of implicit sampling, a recently developed variationally enhanced sampling method, are explored in stochastic optimal control, stochastic localization, and simultaneous localization and mapping (SLAM). The theory is illustrated with examples, and it is found that implicit sampling is significantly more efficient than current Monte Carlo methods in test problems for all three applications.
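
    To make the idea concrete, the sketch below shows implicit sampling for a one-dimensional density p(x) proportional to exp(-F(x)): minimize F, map a Gaussian reference draw xi to a sample x by solving F(x) - min F = xi^2/2, and attach the Jacobian of that map as an importance weight. The quartic potential and all tolerances are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Illustrative non-Gaussian target: p(x) proportional to exp(-F(x)).
def F(x):
    return 0.5 * x**2 + 0.25 * x**4

def dF(x):
    return x + x**3

# Step 1: find the minimizer mu of F and the minimum value phi.
res = minimize_scalar(F)
mu, phi = res.x, res.fun

def implicit_sample(rng):
    """One weighted sample: solve F(x) - phi = xi^2 / 2 for x."""
    xi = rng.standard_normal()
    level = phi + 0.5 * xi**2
    # Solve on the side of mu selected by the sign of xi (the map xi -> x).
    lo, hi = (mu, mu + 10.0) if xi >= 0 else (mu - 10.0, mu)
    x = brentq(lambda t: F(t) - level, lo, hi)
    # The importance weight is the Jacobian |dx/dxi| = |xi / F'(x)|;
    # its limit as xi -> 0 is 1 / sqrt(F''(mu)) (= 1 for this F).
    w = abs(xi / dF(x)) if abs(xi) > 1e-9 else 1.0
    return x, w

rng = np.random.default_rng(0)
draws = [implicit_sample(rng) for _ in range(5000)]
x, w = np.array([d[0] for d in draws]), np.array([d[1] for d in draws])
print("E[x^2] estimate:", np.sum(w * x**2) / np.sum(w))
```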

    Small-noise analysis and symmetrization of implicit Monte Carlo samplers

    Implicit samplers are algorithms for producing independent, weighted samples from multivariate probability distributions. These are often applied in Bayesian data assimilation algorithms. We use Laplace asymptotic expansions to analyze two implicit samplers in the small-noise regime. Our analysis suggests a symmetrization of the algorithms that leads to improved (implicit) sampling schemes at a relatively small additional cost. Computational experiments confirm the theory and show that symmetrization is effective for small-noise sampling problems.
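
    In one dimension the symmetrization can be sketched as follows: for each reference draw, both roots of F(x) - phi = xi^2/2 (one on each side of the minimizer) are computed, a branch is chosen in proportion to its Jacobian, and the weight becomes proportional to the sum of the two Jacobians. This is a hedged reading of the idea with an invented potential; the paper's own schemes and analysis are more general.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Illustrative skewed potential; the asymmetry makes symmetrization matter.
def F(x):
    return 0.5 * x**2 + 0.1 * x**3 + 0.05 * x**4

def dF(x):
    return x + 0.3 * x**2 + 0.2 * x**3

res = minimize_scalar(F)
mu, phi = res.x, res.fun

def symmetrized_sample(rng):
    xi = max(abs(rng.standard_normal()), 1e-9)
    level = phi + 0.5 * xi**2
    # Both roots of F(x) = level, one on each side of mu.
    x_plus = brentq(lambda t: F(t) - level, mu, mu + 20.0)
    x_minus = brentq(lambda t: F(t) - level, mu - 20.0, mu)
    # Jacobians |dx/dxi| = xi / |F'(x)| on the two branches.
    j_plus, j_minus = xi / abs(dF(x_plus)), xi / abs(dF(x_minus))
    # Choose a branch with probability proportional to its Jacobian; the
    # symmetrized weight is then the SUM of the Jacobians, whose variance
    # is smaller in the small-noise regime than either one-sided weight.
    x = x_plus if rng.random() < j_plus / (j_plus + j_minus) else x_minus
    return x, j_plus + j_minus

rng = np.random.default_rng(0)
draws = [symmetrized_sample(rng) for _ in range(5000)]
x, w = np.array([d[0] for d in draws]), np.array([d[1] for d in draws])
print("posterior mean estimate:", np.sum(w * x) / np.sum(w))
```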

    Symmetrized importance samplers for stochastic differential equations

    We study a class of importance sampling methods for stochastic differential equations (SDEs). A small-noise analysis is performed, and the results suggest that a simple symmetrization procedure can significantly improve the performance of our importance sampling schemes when the noise is not too large. We demonstrate that this is indeed the case for a number of linear and nonlinear examples. Potential applications, e.g., data assimilation, are discussed.
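
    The flavor of importance sampling for SDEs can be seen in the small discrete-time sketch below: a drift is added to push paths toward a rare event, and the discrete Girsanov likelihood ratio corrects the estimate. The SDE, the push, and all constants are illustrative, and the symmetrization step studied in the paper is omitted for brevity.

```python
import numpy as np

# Small-noise SDE dX = -X dt + eps dW, X_0 = 0; estimate P(X_T > a),
# an event that plain Monte Carlo resolves poorly when eps is small.
eps, T, n_steps, a = 0.25, 1.0, 50, 1.0
dt = T / n_steps

def estimate(n_paths=20000, push=True, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n_paths)
    logw = np.zeros(n_paths)    # accumulated log likelihood ratio
    for i in range(n_steps):
        # Proposal drift steers paths toward the rare set {x > a}.
        u = (a - x) / (T - i * dt) if push else 0.0
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x += (-x + u) * dt + eps * dw
        # Discrete Girsanov weight: target transition density / proposal's.
        logw += -u * dw / eps - 0.5 * u**2 * dt / eps**2
    return np.mean(np.exp(logw) * (x > a))

print("plain MC:  ", estimate(push=False))
print("importance:", estimate(push=True))
```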

    Data assimilation: mathematics for merging models and data

    When you describe a physical process, for example the weather on Earth, or an engineered system, such as a self-driving car, you typically have two sources of information. The first is a mathematical model, and the second is information obtained by collecting data. To make the best predictions for the weather, or to operate the self-driving car most effectively, you want to use both sources of information. Data assimilation describes the mathematical, numerical, and computational framework for doing just that.
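
    The simplest instance of this merging is a scalar Kalman update, sketched below with made-up numbers: the model forecast and the noisy observation are blended with weights determined by their respective uncertainties.

```python
# Blend a model forecast with a noisy observation; each is weighted by
# how uncertain it is (this is the scalar Kalman filter update).
def kalman_update(x_forecast, var_forecast, y_obs, var_obs):
    gain = var_forecast / (var_forecast + var_obs)      # Kalman gain in [0, 1]
    x_analysis = x_forecast + gain * (y_obs - x_forecast)
    var_analysis = (1.0 - gain) * var_forecast          # uncertainty shrinks
    return x_analysis, var_analysis

# Model forecasts 21.0 with variance 4.0; a sensor reads 19.0 with variance 1.0.
# The analysis (19.4, variance 0.8) trusts the more accurate source more.
print(kalman_update(21.0, 4.0, 19.0, 1.0))
```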

    Localization for MCMC: sampling high-dimensional posterior distributions with local structure

    We investigate how ideas from covariance localization in numerical weather prediction can be used in Markov chain Monte Carlo (MCMC) sampling of high-dimensional posterior distributions arising in Bayesian inverse problems. To localize an inverse problem is to enforce an anticipated "local" structure by (i) neglecting small off-diagonal elements of the prior precision and covariance matrices; and (ii) restricting the influence of observations to their neighborhood. For linear problems, we can specify the conditions under which posterior moments of the localized problem are close to those of the original problem. We explain physical interpretations of our assumptions about local structure and discuss the notion of high dimensionality in local problems, which is different from the usual notion of high dimensionality in function-space MCMC. The Gibbs sampler is a natural choice of MCMC algorithm for localized inverse problems, and we demonstrate that its convergence rate is independent of dimension for localized linear problems. Nonlinear problems can also be tackled efficiently by localization and, as a simple illustration of these ideas, we present a localized Metropolis-within-Gibbs sampler. Several linear and nonlinear numerical examples illustrate localization in the context of MCMC samplers for inverse problems.
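
    A minimal sketch of the dimension-independence phenomenon: for a Gaussian posterior whose precision matrix is banded (the localized structure), each Gibbs update conditions only on a coordinate's neighbors. The tridiagonal precision below is an illustrative stand-in for a localized inverse problem.

```python
import numpy as np

def gibbs_localized(Q, m, n_sweeps=100, seed=0):
    """Gibbs sampler for N(m, Q^{-1}); x_i | x_{-i} is Gaussian with
    mean m_i - (1/Q_ii) * sum_{j != i} Q_ij (x_j - m_j), variance 1/Q_ii."""
    rng = np.random.default_rng(seed)
    x = m.copy()
    for _ in range(n_sweeps):
        for i in range(len(m)):
            # Only nonzero (neighboring) entries of row i contribute here,
            # so each update is local no matter how large the dimension is.
            resid = Q[i] @ (x - m) - Q[i, i] * (x[i] - m[i])
            x[i] = m[i] - resid / Q[i, i] + rng.standard_normal() / np.sqrt(Q[i, i])
    return x

# Illustrative localized posterior: tridiagonal precision in 200 dimensions.
d = 200
Q = 2.0 * np.eye(d) - 0.8 * np.eye(d, k=1) - 0.8 * np.eye(d, k=-1)
m = np.zeros(d)
print(gibbs_localized(Q, m)[:5])
```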

    Gaussian approximations in filters and smoothers for data assimilation

    We present mathematical arguments and experimental evidence that suggest that Gaussian approximations of posterior distributions are appropriate even if the physical system under consideration is nonlinear. The reason for this is a regularizing effect of the observations that can turn multi-modal prior distributions into nearly Gaussian posterior distributions. This has important ramifications for data assimilation (DA) algorithms in numerical weather prediction because the various algorithms (ensemble Kalman filters/smoothers, variational methods, particle filters (PF)/smoothers (PS)) apply Gaussian approximations to different distributions, which leads to different approximate posterior distributions and, subsequently, different degrees of error in their representation of the true posterior distribution. In particular, we explain that, in problems with 'medium' nonlinearity, (i) smoothers and variational methods tend to outperform ensemble Kalman filters; (ii) smoothers can be as accurate as PFs, but may require fewer ensemble members; (iii) localization of PFs can introduce errors that are more severe than errors due to Gaussian approximations. In problems with 'strong' nonlinearity, posterior distributions are not amenable to Gaussian approximation. This happens, e.g., when posterior distributions are multi-modal. PFs can be used on these problems, but the required ensemble size is expected to be large (hundreds to thousands), even if the PFs are localized. Moreover, the usual indicators of performance (small root mean square error and comparable spread) may not be useful in strongly nonlinear problems. We arrive at these conclusions using a combination of theoretical considerations and a suite of numerical DA experiments with low- and high-dimensional nonlinear models in which we can control the nonlinearity.
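
    The regularizing effect described above can be reproduced in a few lines: a bimodal prior is combined with a scalar observation, and an accurate observation collapses the posterior onto a single, nearly Gaussian mode. The prior, the observation, and the noise levels are illustrative.

```python
import numpy as np

# Bimodal prior p(x) ~ exp(-(x^2 - 1)^2 / (2 * 0.1)); observe y = x + noise.
x = np.linspace(-3.0, 3.0, 2001)
log_prior = -(x**2 - 1.0)**2 / (2.0 * 0.1)

def posterior(y, obs_var):
    log_post = log_prior - (y - x)**2 / (2.0 * obs_var)
    p = np.exp(log_post - log_post.max())
    return p / np.trapz(p, x)

for obs_var in (10.0, 0.01):    # vague vs. accurate observation of y = 0.9
    p = posterior(0.9, obs_var)
    mean = np.trapz(x * p, x)
    sd = np.sqrt(np.trapz((x - mean)**2 * p, x))
    n_modes = np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))
    print(f"obs_var={obs_var}: modes={n_modes}, mean={mean:.2f}, sd={sd:.2f}")
```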

    Parameter estimation by implicit sampling

    Implicit sampling is a weighted sampling method that is used in data assimilation, where one sequentially updates estimates of the state of a stochastic model based on a stream of noisy or incomplete data. Here we describe how to use implicit sampling in parameter estimation problems, where the goal is to find parameters of a numerical model, e.g., a partial differential equation (PDE), such that the output of the numerical model is compatible with (noisy) data. We use the Bayesian approach to parameter estimation, in which a posterior probability density describes the probability of the parameters conditioned on the data, and compute an empirical estimate of this posterior with implicit sampling. Our approach generates independent samples, so that some of the practical difficulties one encounters with Markov chain Monte Carlo methods, e.g., burn-in time or correlations among dependent samples, are avoided. We describe a new implementation of implicit sampling for parameter estimation problems that makes use of multiple grids (coarse to fine) and BFGS optimization coupled to adjoint equations for the required gradient calculations. The implementation is "dimension independent", in the sense that a well-defined finite-dimensional subspace is sampled as the mesh used for discretization of the PDE is refined. We illustrate the algorithm with an example where we estimate a diffusion coefficient in an elliptic equation from sparse and noisy pressure measurements. In the example, dimension/mesh independence is achieved via Karhunen-Loève expansions.
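
    The sketch below captures the optimization-then-sampling structure in a toy setting: BFGS locates the maximum of the posterior density, the inverse Hessian built up by BFGS defines a Gaussian map for the reference samples, and importance weights correct for non-Gaussianity. The forward model, noise level, and data are invented for illustration; the paper's multigrid, adjoint-gradient, and Karhunen-Loève machinery is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Illustrative nonlinear forward model G(theta) and synthetic noisy data y.
def G(theta):
    return np.array([np.exp(theta[0]) + theta[1], theta[0] * theta[1]])

theta_true = np.array([0.3, -0.5])
y = G(theta_true) + 0.1 * rng.standard_normal(2)

def F(theta):  # negative log posterior: Gaussian prior + Gaussian likelihood
    return 0.5 * np.sum(theta**2) + 0.5 * np.sum((y - G(theta))**2) / 0.1**2

# Step 1: find the MAP point with BFGS; the inverse-Hessian estimate that
# BFGS builds up doubles as the covariance of the sampling density.
res = minimize(F, np.zeros(2), method="BFGS")
mu, phi, H_inv = res.x, res.fun, res.hess_inv
L = np.linalg.cholesky(H_inv)

# Step 2: shift reference Gaussian draws to the MAP and attach the weights
# w ~ exp(-(F(theta) - phi) + xi.xi / 2), which equal 1 when F is quadratic.
n = 2000
xi = rng.standard_normal((n, 2))
thetas = mu + xi @ L.T
logw = np.array([-(F(t) - phi) for t in thetas]) + 0.5 * np.sum(xi**2, axis=1)
w = np.exp(logw - logw.max())
print("posterior mean estimate:", (w[:, None] * thetas).sum(0) / w.sum())
```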